Scale space consistency of piecewise constant least squares estimators -- another look at the regressogram
We study the asymptotic behavior of piecewise constant least squares
regression estimates when the number of partitions of the estimate is
penalized. We show that the estimator is consistent in the relevant metric if
the signal lies in the space of càdlàg functions, equipped either with the
Skorokhod metric or with the supremum metric.
Moreover, we consider the family of estimates under a varying smoothing
parameter, also called scale space. We prove convergence of the empirical scale
space towards its deterministic target.

Comment: Published at http://dx.doi.org/10.1214/074921707000000274 in the IMS
Lecture Notes Monograph Series
(http://www.imstat.org/publications/lecnotes.htm) by the Institute of
Mathematical Statistics (http://www.imstat.org).
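The penalized partition estimator summarized in this abstract can be computed exactly by dynamic programming over the candidate breakpoints. The sketch below is an illustration, not the paper's code: the function name `penalized_regressogram` and the choice of a fixed penalty `gamma` per segment are assumptions.

```python
import numpy as np

def penalized_regressogram(y, gamma):
    """Piecewise constant least squares fit with a penalty gamma per segment.

    Minimizes  (sum of squared residuals) + gamma * (number of segments)
    over all interval partitions of the index set; on each interval the
    fitted value is the sample mean (the regressogram value).
    """
    y = np.asarray(y, dtype=float)
    n = len(y)
    s = np.concatenate(([0.0], np.cumsum(y)))        # prefix sums
    s2 = np.concatenate(([0.0], np.cumsum(y ** 2)))  # prefix sums of squares

    def seg_cost(i, j):
        # residual sum of squares of y[i:j] around its mean, in O(1)
        m = j - i
        return (s2[j] - s2[i]) - (s[j] - s[i]) ** 2 / m

    best = np.full(n + 1, np.inf)   # best[j] = optimal penalized cost of y[:j]
    best[0] = 0.0
    back = np.zeros(n + 1, dtype=int)
    for j in range(1, n + 1):
        for i in range(j):
            c = best[i] + seg_cost(i, j) + gamma
            if c < best[j]:
                best[j], back[j] = c, i

    fit = np.empty(n)               # reconstruct the fitted step function
    j = n
    while j > 0:
        i = back[j]
        fit[i:j] = (s[j] - s[i]) / (j - i)
        j = i
    return fit
```

With a step signal such as y = (0, 0, 0, 5, 5, 5) and a moderate penalty, the estimator recovers the two constant pieces; as gamma grows the fit degenerates to the single global mean, which is the varying-smoothing-parameter ("scale space") family the abstract studies.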
A local maximal inequality under uniform entropy
Abstract: We derive an upper bound for the mean of the supremum of the empirical process indexed by a class of functions whose variances are bounded by a small constant δ. The bound is expressed in terms of the uniform entropy integral of the class at δ. Applied to the modulus of continuity of the contrast functions, the bound yields rates of convergence for minimum contrast estimators.
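In standard empirical process notation, with $\mathbb{G}_n$ the empirical process, $\mathcal{F}$ a class with envelope bounded by one, and $\sup_{f\in\mathcal{F}} Pf^2 \le \delta^2$, a local maximal inequality of the kind summarized above takes the following shape (a reconstruction under these normalizations, not a verbatim theorem statement from the paper):

```latex
\mathbb{E}\,\bigl\|\mathbb{G}_n\bigr\|_{\mathcal{F}}
  \;\lesssim\;
  J(\delta,\mathcal{F})
  \left(1+\frac{J(\delta,\mathcal{F})}{\delta^{2}\sqrt{n}}\right),
\qquad
J(\delta,\mathcal{F})
  = \int_{0}^{\delta}
    \sup_{Q}\sqrt{1+\log N\bigl(\varepsilon,\mathcal{F},L_{2}(Q)\bigr)}\,
    d\varepsilon .
```

The second factor shows why localizing at small δ matters: when δ is not too small relative to $n^{-1/2}$, the bound is essentially the entropy integral itself, and this is what drives the rates of convergence for minimum contrast estimators.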
Semiparametric Regression Analysis of Panel Count Data: A Practical Review
Optimal Concentration of Information Content For Log-Concave Densities
An elementary proof is provided of sharp bounds for the varentropy of random
vectors with log-concave densities, as well as for deviations of the
information content from its mean. These bounds significantly improve on the
bounds obtained by Bobkov and Madiman ({\it Ann. Probab.}, 39(4):1528--1543,
2011).

Comment: 15 pages. Changes in v2: Remark 2.5 (due to C. Saroglou) added with
more general sufficient conditions for equality in Theorem 2.3. Also some
minor corrections and added references.
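Writing $\tilde h(X) = -\log f(X)$ for the information content of a random vector $X$ with log-concave density $f$ on $\mathbb{R}^n$, the quantity controlled here is the varentropy $V(X) = \operatorname{Var}\tilde h(X)$. The sharp bound referred to above has the dimensional form below (stated from the standard formulation of this result in the log-concavity literature, not quoted from the paper):

```latex
V(X) \;=\; \operatorname{Var}\bigl(-\log f(X)\bigr) \;\le\; n .
```

Consequently the information content concentrates around the entropy $h(X) = \mathbb{E}\,\tilde h(X)$ at scale $\sqrt{n}$, which is the deviation statement the abstract refers to.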
D-optimal designs via a cocktail algorithm
A fast new algorithm is proposed for numerical computation of (approximate)
D-optimal designs. This "cocktail algorithm" extends the well-known vertex
direction method (VDM; Fedorov 1972) and the multiplicative algorithm (Silvey,
Titterington and Torsney, 1978), and shares their simplicity and monotonic
convergence properties. Numerical examples show that the cocktail algorithm can
lead to dramatically improved speed, sometimes by orders of magnitude, relative
to either the multiplicative algorithm or the vertex exchange method (a variant
of VDM). Key to the improved speed is a new nearest neighbor exchange strategy,
which acts locally and complements the global effect of the multiplicative
algorithm. Possible extensions to related problems such as nonparametric
maximum likelihood estimation are mentioned.

Comment: A number of changes after accounting for the referees' comments,
including new examples in Section 4 and more detailed explanations throughout.
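The multiplicative algorithm that the cocktail algorithm builds on is simple to state for a discrete candidate set. The sketch below implements only that multiplicative step, not the full cocktail algorithm (no vertex exchange and no nearest neighbor exchange); the function name and stopping tolerance are illustrative assumptions.

```python
import numpy as np

def multiplicative_d_optimal(X, n_iter=1000, tol=1e-6):
    """Multiplicative algorithm for approximate D-optimal design weights.

    X : (m, p) array whose rows are the candidate design points.
    Returns a probability vector w over the m candidates.
    """
    m, p = X.shape
    w = np.full(m, 1.0 / m)                        # start from the uniform design
    for _ in range(n_iter):
        M = X.T @ (w[:, None] * X)                 # information matrix M(w)
        Minv = np.linalg.inv(M)
        d = np.einsum('ij,jk,ik->i', X, Minv, X)   # variance function d(x_i, w)
        if d.max() - p < tol:                      # equivalence theorem: optimal iff max d = p
            break
        w = w * d / p                              # multiplicative update
    return w
```

The update multiplies each weight by d(x_i, w)/p; since the weights satisfy sum_i w_i d(x_i, w) = p, they remain a probability vector, and the general equivalence theorem (max_i d(x_i, w) = p at the optimum) supplies the stopping rule. The monotonic convergence mentioned in the abstract refers to this update; the nearest neighbor and vertex exchange steps of the cocktail algorithm accelerate the slow local mass transport between nearby support points that the multiplicative update alone handles poorly.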